4 research outputs found

    Improving large-scale k-nearest neighbor text categorization with label autoencoders

    In this paper, we introduce a multi-label lazy learning approach to deal with automatic semantic indexing in large document collections in the presence of complex and structured label vocabularies with high inter-label correlation. The proposed method is an evolution of the traditional k-Nearest Neighbors algorithm that uses a large autoencoder trained to map the large label space to a reduced-size latent space and to regenerate the predicted labels from that latent space. We have evaluated our proposal on a large portion of the MEDLINE biomedical document collection, which uses the Medical Subject Headings (MeSH) thesaurus as a controlled vocabulary. In our experiments we propose and evaluate several document representation approaches and different label autoencoder configurations.
    Ministerio de Ciencia e Innovación | Ref. PID2020-113230RB-C2
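
    A minimal sketch of the idea described above, assuming dense document vectors and binary label vectors are already available; the architecture, dimensions, and helper names are our own illustrative choices, not the paper's implementation:

    import torch
    import torch.nn as nn

    class LabelAutoencoder(nn.Module):
        # Compresses a high-dimensional binary label vector (e.g. MeSH
        # descriptors) into a small latent code and reconstructs it.
        def __init__(self, n_labels, latent_dim):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_labels, latent_dim), nn.ReLU())
            self.decoder = nn.Sequential(nn.Linear(latent_dim, n_labels), nn.Sigmoid())

        def forward(self, y):
            # Trained with a reconstruction loss such as binary cross-entropy.
            return self.decoder(self.encoder(y))

    def knn_predict(doc_vec, train_docs, train_labels, model, k=5):
        # 1. Retrieve the k nearest training documents by cosine similarity.
        sims = nn.functional.cosine_similarity(train_docs, doc_vec.unsqueeze(0))
        idx = sims.topk(k).indices
        # 2. Average the neighbours' labels in the learned latent space,
        #    where inter-label correlations are captured.
        latent = model.encoder(train_labels[idx]).mean(dim=0)
        # 3. Decode the averaged code back into per-label probabilities.
        return model.decoder(latent)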

    Early stopping by correlating online indicators in neural networks

    In order to minimize the generalization error in neural networks, a novel technique to identify overfitting phenomena when training the learner is formally introduced. This supports a reliable and trustworthy early stopping condition, thus improving the predictive power of that type of modeling. Our proposal exploits the correlation over time in a collection of online indicators, namely characteristic functions indicating whether a set of hypotheses is met, associated with a range of independent stopping conditions built from a canary judgment to evaluate the presence of overfitting. In that way, we provide a formal basis for deciding when to interrupt the learning process. As opposed to previous approaches focused on a single criterion, we take advantage of subsidiarities between independent assessments, thus seeking both a wider operating range and greater diagnostic reliability. To illustrate the effectiveness of the halting condition described, we choose to work in the sphere of natural language processing, an operational continuum increasingly based on machine learning. As a case study, we focus on parser generation, one of the most demanding and complex tasks in the domain. The selection of cross-validation as the canary function enables a direct comparison with the most representative early stopping conditions based on overfitting identification, pointing to a promising start toward optimal bias and variance control.
    Funded for open access publication: Universidade de Vigo/CISUG
    info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2013-2016/TIN2017-85160-C2-2-R/ES/AVANCES EN NUEVOS SISTEMAS DE EXTRACCION DE RESPUESTAS CON ANALISIS SEMANTICO Y APRENDIZAJE PROFUNDO
    info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PID2020-113230RB-C22/ES/SEQUENCE LABELING MULTITASK MODELS FOR LINGUISTICALLY ENRICHED NER: SEMANTICS AND DOMAIN ADAPTATION (SCANNER-UVIGO)
    Agencia Estatal de Investigación | Ref. TIN2017-85160-C2-2-R
    Agencia Estatal de Investigación | Ref. PID2020-113230RB-C22
    Xunta de Galicia | Ref. ED431C 2018/5
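
    The combination of independent indicators lends itself to a simple voting scheme. The following sketch is only illustrative (the indicator choices, window size, and quorum are our assumptions, not taken from the paper): training halts when a sustained majority of overfitting indicators agree, rather than trusting any single criterion:

    from collections import deque

    class CombinedEarlyStopper:
        def __init__(self, indicators, window=5, quorum=0.6):
            self.indicators = indicators        # functions: loss history -> bool
            self.votes = deque(maxlen=window)   # recent fraction of firing indicators
            self.quorum = quorum
            self.history = []                   # validation losses observed so far

        def should_stop(self, val_loss):
            self.history.append(val_loss)
            fired = sum(ind(self.history) for ind in self.indicators)
            self.votes.append(fired / len(self.indicators))
            full = len(self.votes) == self.votes.maxlen
            return full and sum(self.votes) / len(self.votes) >= self.quorum

    # Two toy indicators: the validation loss is rising, and it sits well
    # above the best value seen so far (a crude overfitting signal).
    rising = lambda h: len(h) > 1 and h[-1] > h[-2]
    above_best = lambda h: len(h) > 1 and h[-1] > 1.05 * min(h)
    stopper = CombinedEarlyStopper([rising, above_best])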

    Surfing the modeling of PoS taggers in low-resource scenarios

    The recent trend toward the application of deep structured techniques has revealed the limits of huge models in natural language processing. This has reawakened interest in traditional machine learning algorithms, which have proved to remain competitive in certain contexts, particularly in low-resource settings. In parallel, model selection has become an essential task for boosting performance at reasonable cost, all the more so in processes involving domains where training and/or computational resources are scarce. Against this backdrop, we evaluate the early estimation of learning curves as a practical mechanism for selecting the most appropriate model in scenarios characterized by the use of non-deep learners in resource-lean settings. On the basis of a formal approximation model previously evaluated under conditions of wide availability of training and validation resources, we study the reliability of such an approach in a different and much more demanding operational environment. Using as a case study the generation of PoS taggers for Galician, a language belonging to the Western Ibero-Romance group, the experimental results are consistent with our expectations.
    Ministerio de Ciencia e Innovación | Ref. PID2020-113230RB-C21
    Ministerio de Ciencia e Innovación | Ref. PID2020-113230RB-C22
    Xunta de Galicia | Ref. ED431C 2020/1
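
    A hedged sketch of early learning-curve estimation: accuracies measured on a few small training sizes are fitted to an inverse power law (a common functional family for learning curves; the paper's exact approximation model may differ), and the fitted asymptote serves as the model-selection score:

    import numpy as np
    from scipy.optimize import curve_fit

    def inverse_power_law(n, a, b, c):
        # accuracy(n) ~ a - b * n**(-c): rises toward the asymptote a.
        return a - b * np.power(n, -c)

    def predicted_asymptote(sizes, accuracies):
        # Fit the curve on cheap, small-sample measurements and return the
        # extrapolated ceiling, used to rank candidate models.
        (a, b, c), _ = curve_fit(inverse_power_law, sizes, accuracies,
                                 p0=[1.0, 1.0, 0.5], maxfev=10000)
        return a

    # Hypothetical measurements for one candidate tagger:
    sizes = np.array([1000.0, 2000.0, 4000.0, 8000.0])
    accs = np.array([0.80, 0.85, 0.88, 0.90])
    print(predicted_asymptote(sizes, accs))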

    Absolute convergence and error thresholds in non-active adaptive sampling

    Non-active adaptive sampling is a way of building machine learning models from a training database in which the sample size required to guarantee quality is derived dynamically and automatically. In this context, and regardless of the strategy used for both scheduling and generating weak predictors, a proposal for calculating absolute convergence and error thresholds is described. It not only makes it possible to establish when the quality of the model no longer increases, but also supplies a proximity condition to estimate, in absolute terms, how close the model is to achieving that goal, thus supporting decision making when fine-tuning learning parameters in model selection. The technique is proved correct and complete with respect to our working hypotheses, and it also strengthens the robustness of the sampling scheme. Tests meet our expectations and illustrate the proposal in the domain of natural language processing, taking the generation of part-of-speech taggers as a case study.
    Funded for open access publication: Universidade de Vigo/CISUG
    Agencia Estatal de Investigación | Ref. TIN2017-85160-C2-1-R
    Agencia Estatal de Investigación | Ref. TIN2017-85160-C2-2-R
    Agencia Estatal de Investigación | Ref. PID2020-113230RB-C21
    Agencia Estatal de Investigación | Ref. PID2020-113230RB-C22
    Xunta de Galicia | Ref. ED431C 2018/5
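
    A simplified sketch of the idea (the function name, schedule, and thresholds are illustrative, not the paper's formal criterion): the training sample grows on a fixed schedule, and convergence is declared once successive quality gains stay below an absolute threshold:

    import math

    def adaptive_sampling(train_and_score, schedule, epsilon=5e-3, patience=2):
        # train_and_score(n) -> accuracy of a weak predictor trained on n items.
        scores, calm = [], 0
        for n in schedule:
            scores.append(train_and_score(n))
            if len(scores) > 1 and abs(scores[-1] - scores[-2]) < epsilon:
                calm += 1
                if calm >= patience:           # gains have flattened out
                    return n, scores[-1]       # sample size deemed sufficient
            else:
                calm = 0
        return schedule[-1], scores[-1]

    # Toy usage with a synthetic, saturating accuracy curve:
    print(adaptive_sampling(lambda n: 0.95 - 0.5 / math.sqrt(n),
                            schedule=[2 ** i for i in range(6, 14)]))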